Enable AMD tests for ZCH & Fix OSS#5727
Open
Ali-Tehrani wants to merge 2 commits into pytorch:main from
Conversation
Summary:
X-link: facebookresearch/FBGEMM#2647

Context
---------
Running MPZCH on AMD hardware causes a CPU-to-GPU hang. The cause was identified as the spin-wait loop in the ZCH GPU kernel. TorchRec creates metadata with default values of -1, which is then provided to the FBGEMM kernel. Note that if an identity slot is -1, its metadata is also -1, and once the metadata for that slot is updated it never becomes -1 again. Suppose thread A writes to slot i and updates the identity slot with its item, and thread B arrives late and is also trying to write to slot i. While thread A is updating the metadata, thread B runs a spin-wait loop waiting for the metadata to change. This issues many atomic reads just to check whether thread A has updated the metadata. AMD MI3X handles atomics differently than NVIDIA GPUs; as a result the latency is very high, and atomic contention can occur when many threads read the atomic value. This ultimately manifests as a hang. NOTE: this only affects identities when they are first inserted and their metadata is still -1, which explains why the hang occurs during the first few batches. WIP to reduce the number of atomics in ZCH.

Implementation
------------------
Added a spin counter (SC) to the spin-wait loop in check_evict and check_min. Its value was determined through benchmarking.

Reviewed By: kaanbaloglu

Differential Revision: D102424864
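A rough sketch of the bounded spin-wait idea described above (the function name, metadata type, and the max_spins parameter are hypothetical, not the actual FBGEMM kernel; the real bound was chosen via benchmarking):

```cpp
// Hypothetical sketch of a bounded spin-wait; not the FBGEMM source.
// The metadata slot starts at -1 and is written exactly once by the
// thread that owns the insertion into that slot.
__device__ inline bool wait_for_metadata(int* metadata, int max_spins) {
  int spins = 0;
  // atomicAdd(..., 0) is used here as an atomic read of the slot.
  while (atomicAdd(metadata, 0) == -1) {
    // Cap the number of atomic polls so contended slots on AMD MI3X
    // do not degenerate into what looks like a hang.
    if (++spins >= max_spins) {
      return false;  // caller backs off / retries instead of spinning forever
    }
  }
  return true;  // metadata was published by the inserting thread
}
```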
Summary:

Context
---------
Both `faster_hash.cu` and `faster_hash.cpp` are removed from the OSS build if ROCm is used, and the tests within `faster_hash_test` are skipped as well. Just removing these guards for `faster_hash.cpp` doesn't [work](https://github.com/pytorch/FBGEMM/actions/runs/25284083884/job/74125701967), since `PackedTensorAccessor64::operator[]` works on __device__ only.

Implementation
------------------
- Removed skipIfROCm guards within tests
- Removed ROCm guard in FBGEMM cmake build
- In the cpp file, changed PackedTensorAccessor to TensorAccessor (see the sketch below)

Differential Revision: D103694673
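A minimal sketch of the host-side accessor change, assuming an int64 2-D identities tensor (the function and variable names are illustrative, not the FBGEMM source):

```cpp
#include <ATen/ATen.h>

// Illustrative host-side loop: PackedTensorAccessor64::operator[] is
// __device__-only, so the .cpp path uses TensorAccessor, which works on CPU.
void scan_identities_cpu(const at::Tensor& identities) {
  auto acc = identities.accessor<int64_t, 2>();  // TensorAccessor, host-side
  for (int64_t i = 0; i < acc.size(0); ++i) {
    if (acc[i][0] == -1) {
      // slot i is still unoccupied (default metadata/identity value)
    }
  }
}
```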
Contributor
@Ali-Tehrani has exported this pull request. If you are a Meta employee, you can view the originating Diff in D103694673.